Reuse Large Buffers in MigrateSession #623
Conversation
See comments.
@@ -61,8 +69,8 @@ public void Return(PoolEntry buffer)
Interlocked.Decrement(ref pool[level].size);
This needs commenting. My first thought on seeing .size was to wonder why we can't just use the queue.Count method. Now it looks like .size is the total number of allocations? The PoolLevel field name comments are uninformative.
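The distinction raised here can be illustrated with a minimal sketch (in Java rather than C#, and with assumed names): `size` counts live allocations at a level, including buffers currently checked out, so it can legitimately differ from the queue's count.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch: 'size' counts live allocations at this level,
// including buffers currently checked out, so it differs from items.size().
class PoolLevelSketch {
    final Queue<byte[]> items = new ArrayDeque<>();
    int size; // total live allocations (pooled + checked out)

    byte[] rent(int bufferSize) {
        byte[] buf = items.poll();
        if (buf == null) {
            size++;              // a new allocation enters circulation
            buf = new byte[bufferSize];
        }
        return buf;
    }

    void release(byte[] buf) {
        items.offer(buf);        // back in the pool; 'size' is unchanged
    }
}
```

With this reading, renting two buffers and returning one leaves `size == 2` while the queue holds only one entry, which is why `queue.Count` alone would not do.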
I am not sure what the comment refers to here? The LFBP implementation has not changed. I just added Purge functionality and a GetStats option. There might be a need to redesign the buffer pool, but that should be part of another PR.
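The Purge functionality mentioned here is not shown in the diff; a drain of each level's queue might look like the following sketch (in Java, with class and field names assumed, not taken from the actual LimitedFixedBufferPool):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of a Purge that drains idle (pooled) buffers only.
// Checked-out buffers are untouched; each drained entry leaves circulation.
class LimitedPoolSketch {
    final Queue<byte[]>[] levels;
    final int[] size; // live allocations per level (pooled + checked out)

    @SuppressWarnings("unchecked")
    LimitedPoolSketch(int numLevels) {
        levels = new Queue[numLevels];
        size = new int[numLevels];
        for (int i = 0; i < numLevels; i++) levels[i] = new ArrayDeque<>();
    }

    void add(int level, byte[] buf) {
        levels[level].offer(buf);
        size[level]++;
    }

    void purge() {
        for (int i = 0; i < levels.length; i++) {
            while (levels[i].poll() != null)
                size[i]--; // drained entries no longer count as live
        }
    }
}
```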
{
#if HANGDETECT
                    if (++count % 10000 == 0)
                        logger?.LogTrace("Dispose iteration {count}, {activeHandlerCount}", count, activeHandlerCount);
#endif
                    Thread.Yield();
                }
            for (int i = 0; i < numLevels; i++)
            for (var i = 0; i < numLevels; i++)
            {
                if (pool[i] == null) continue;
                while (pool[i].size > 0)
If I'm understanding correctly that .size is the total number of allocations, which may be > maxEntriesPerLevel, then this can spin forever if there are unreturned items. This might warrant an Assert, or logging with an exit. (Holding an unreturned item while calling Dispose() (and Purge()?) is bad anyway, so let's make it easier to catch.)
This PR does not change the LFBP implementation. Changes to its core operations should be part of a separate PR.
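One way to make the unreturned-buffer case easier to catch, as the reviewer suggests, is a bounded wait that logs and exits instead of spinning forever. This is a hedged sketch (in Java, with an assumed counter), not the actual Dispose code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: wait a bounded number of iterations for outstanding
// buffers to come back, then warn and bail out instead of hanging Dispose.
class DisposeSketch {
    static boolean drain(AtomicInteger outstanding, int maxSpins) {
        int spins = 0;
        while (outstanding.get() > 0) {
            if (++spins > maxSpins) {
                // Surface the leak so callers holding unreturned buffers
                // are caught in testing rather than causing a hang.
                System.err.println("Dispose: " + outstanding.get()
                        + " buffer(s) never returned; giving up");
                return false;
            }
            Thread.yield();
        }
        return true;
    }
}
```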
/// MigrationManager Buffer Pool
/// </summary>
MM,
/// <summary>
Why not Migration, Replication, ServerSocket?
It should not use the word socket, as that implies TCP; the same network stack can be used with RDMA etc. Maybe ServerListener?
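With the descriptive names discussed in this thread, the enum might read as follows (a sketch in Java; the enum name and any member not mentioned above are assumptions):

```java
// Hypothetical sketch of the descriptive names discussed above.
// 'ServerListener' avoids 'Socket', which would wrongly imply TCP-only.
enum BufferPoolOwnerSketch {
    Migration,      // MigrationManager buffer pool (currently 'MM')
    Replication,    // replication send/receive buffers
    ServerListener  // accept-side handler (works over TCP, RDMA, ...)
}
```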
This PR tries to improve memory utilization and reduce fragmentation by reusing large buffers that were allocated across different migrate sessions.

The PR includes the following:
- Separate the buffer pool from the send and receive spec.
- Remove unused parts of NetworkSenderBase.
- Resize allocations for send/receive in the replication code.

Notes:
- There is an upper limit on the number of entries per level in the LimitedFixedBufferPool, which may cause fragmentation of the LOH due to the way we allocate and return buffers to the pool itself (shown below):

garnet/libs/common/Memory/LimitedFixedBufferPool.cs
Lines 55 to 61 in 68a7a85
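The reuse described above can be pictured as renting large buffers from a pool shared across migrate sessions instead of allocating per session, so the same large arrays are recycled and heap fragmentation is reduced. A minimal sketch in Java (names are assumptions, not Garnet's API):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: sessions rent large buffers from a shared pool
// instead of allocating their own, so the same large arrays are reused
// across sessions and LOH-style fragmentation is reduced.
class SharedBufferPoolSketch {
    private final ConcurrentLinkedQueue<byte[]> pool = new ConcurrentLinkedQueue<>();
    private final int bufferSize;

    SharedBufferPoolSketch(int bufferSize) { this.bufferSize = bufferSize; }

    byte[] rent() {
        byte[] buf = pool.poll();
        return buf != null ? buf : new byte[bufferSize]; // reuse if possible
    }

    void giveBack(byte[] buf) {
        pool.offer(buf); // keep the allocation for the next session
    }
}
```

Under this sketch, a buffer returned by one session is handed to the next renter unchanged, which is the reuse behavior the PR aims for.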